What are Multimodalities made of? Modeling Output in a Multimodal Dialogue System
Abstract
In order to produce coherent multimodal output, a presentation planner in a multimodal dialogue system must have a notion of the types of multimodalities currently present in the system. More specifically, the planner needs information about the multimodal properties and rendering capabilities of these modalities. It is therefore necessary to define an output multimodality model that describes the available renderers in sufficient detail while keeping a level of abstraction that enables the presentation planner to support a large set of different renderer types. In this paper we present our approach to such a multimodality model.
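To make the idea concrete, the following is a minimal sketch of what such an output multimodality model could look like in code. All names and attributes here (e.g. `Modality`, `RendererCapabilities`) are illustrative assumptions, not the model actually defined in the paper.

```python
# Hypothetical sketch of an output multimodality model; names and
# attributes are illustrative assumptions, not the paper's actual model.
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Optional, Tuple


class ModalityType(Enum):
    """Coarse classification of output modalities."""
    SPEECH = auto()
    TEXT = auto()
    GRAPHICS = auto()
    GESTURE = auto()


@dataclass
class RendererCapabilities:
    """Rendering properties the presentation planner may query."""
    supports_streaming: bool = False       # can output be produced incrementally?
    supports_interruption: bool = False    # can ongoing output be aborted?
    temporal: bool = False                 # does the output unfold over time (e.g. speech)?
    spatial_resolution: Optional[Tuple[int, int]] = None  # (width, height) for visual renderers


@dataclass
class Modality:
    """Abstract description of one output modality registered in the system."""
    name: str
    type: ModalityType
    capabilities: RendererCapabilities = field(default_factory=RendererCapabilities)


# Example: the planner inspects the registered modalities and selects
# renderers whose capabilities match the current presentation goal.
available = [
    Modality("tts", ModalityType.SPEECH,
             RendererCapabilities(supports_streaming=True, temporal=True)),
    Modality("screen", ModalityType.GRAPHICS,
             RendererCapabilities(spatial_resolution=(800, 480))),
]
visual_renderers = [m for m in available if m.type is ModalityType.GRAPHICS]
```

The point of such a model is the trade-off named in the abstract: concrete enough for the planner to reason about rendering capabilities, abstract enough to cover many renderer types through one interface.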
Similar Papers
Mobile Multimodal Dialogue Systems
Mobile multimodal dialogue systems allow the user and the system to adapt their choice of input and output modality according to various technical and cognitive resource limitations and the task at hand. We present the multimodal dialogue system SmartKom, which can be used as a mobile travel companion for car drivers and pedestrians. SmartKom combines speech, gestures, and facial expressions for i...
Specification and interpretation of multimodal dialogue models for human-robot interaction
In this paper a conceptual model for the design and construction of interactive multimodal systems is presented. This model is based on a representational language for the specification of dialogue models and its associated program interpreter. Dialogue models are domain- and modality-independent conversational schemes characterizing the dynamics of a multimodal interaction, and domain specific ...
Specification and realisation of multimodal output in dialogue systems
We present a high level formalism for specifying verbal and nonverbal output from a multimodal dialogue system. The output specification is XML-based and provides information about communicative functions of the output without detailing the realisation of these functions. The specification can be used to control an animated character that uses speech and gestures. We give examples from an imple...
A Testbed for Evaluating Multimodal Dialogue Systems for Small Screen Devices
This paper discusses the requirements for developing a multimodal spoken dialogue system for mobile phone applications. Since visual output as part of the multimodal system is limited by the restricted screen size of mobile phones, research in the field of information visualisation for small screen devices is discussed and combinations of these techniques with spoken output are sketched. ...
Incremental Dialogue Understanding and Feedback for Multiparty, Multimodal Conversation
In order to provide comprehensive listening behavior, virtual humans engaged in dialogue need to incrementally listen, interpret, understand, and react to what someone is saying, in real time, as they are saying it. In this paper, we describe an implemented system for engaging in multiparty dialogue, including incremental understanding and a range of feedback. We present an FML message extensio...